We present a noisy channel generative model of two sequences, for example text and speech, which enables uncovering the association between the two modalities when only limited paired data is available. To address the intractability of the exact model under a realistic data setup, we propose a variational inference approximation. To train this variational model with categorical data, we propose a KL encoder loss approach which has connections to the wake-sleep algorithm. Identifying the joint or conditional distributions from unpaired samples of the marginals alone is possible only under certain conditions on the data distribution, and we discuss the conditional independence assumptions under which this can be achieved, which in turn guide the architecture design. Experimental results show that even a tiny amount of paired data (5 minutes) is sufficient to learn to relate the two modalities (here graphemes and phonemes) when a massive amount of unpaired data is available, paving the way to adopting this principled approach for all seq2seq models in low-resource regimes.
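As a rough illustration only (the factorization, the roles of the variables, and the loss weighting below are expository assumptions, not the paper's exact specification), a noisy channel model over an observed sequence x (e.g., phonemes) with a latent sequence y (e.g., graphemes) can be trained on unpaired data through a variational posterior q_phi(y|x), with an additional supervised term on the small paired set:

```latex
% ELBO on an unpaired observation x (sketch)
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(y \mid x)}\big[\log p_\theta(x \mid y)\big]
  \;-\; \mathrm{KL}\big(q_\phi(y \mid x) \,\|\, p_\theta(y)\big)

% Combined semi-supervised objective (\lambda is an assumed weighting)
\mathcal{L}(\theta, \phi) \;=\;
  -\,\mathbb{E}_{x \sim \mathcal{D}_{\text{unpaired}}}\big[\mathrm{ELBO}_{\theta,\phi}(x)\big]
  \;-\; \lambda\, \mathbb{E}_{(x, y) \sim \mathcal{D}_{\text{paired}}}
        \big[\log p_\theta(x \mid y) + \log q_\phi(y \mid x)\big]
```

Because y is categorical, the encoder term of the ELBO cannot be reparameterized directly; a sleep-phase-style alternative in the spirit of wake-sleep is to draw (y, x) from the generative model p_theta(y) p_theta(x|y) and train q_phi(y|x) by maximum likelihood on those samples, which is one way to read the "KL encoder loss" mentioned above.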
This work explores the task of synthesizing speech in the voice of a human speaker who does not exist. We call this task "speaker generation" and present TacoSpawn, a system that performs competitively at this task. TacoSpawn is a recurrent attention-based text-to-speech model that learns a distribution over a speaker embedding space, which enables sampling of novel and diverse speakers. Our approach is easy to implement and does not require transfer learning from speaker ID systems. We present objective and subjective metrics for evaluating performance on this task, and demonstrate that our proposed objective metrics correlate with human perception of speaker similarity. Audio samples are available on our demo page.
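A minimal sketch of the underlying idea, with placeholder names and a simple diagonal-Gaussian prior standing in for whatever distribution the actual system learns: fit a parametric prior over the speaker embeddings learned by a multi-speaker TTS model, then sample a new point in that space to condition synthesis on a speaker never seen in training.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian_prior(speaker_embeddings: np.ndarray):
    """Fit a diagonal Gaussian over the table of learned speaker embeddings."""
    mean = speaker_embeddings.mean(axis=0)
    std = speaker_embeddings.std(axis=0) + 1e-6
    return mean, std

def sample_novel_speaker(mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Draw a new point in speaker-embedding space (a 'generated' speaker)."""
    return mean + std * rng.standard_normal(mean.shape)

# In practice the embeddings would come from a trained multi-speaker TTS model's
# speaker lookup table; random data is used here only to make the sketch runnable.
speaker_embeddings = rng.standard_normal((100, 64))
mean, std = fit_gaussian_prior(speaker_embeddings)
new_speaker = sample_novel_speaker(mean, std)
# new_speaker would then replace a real speaker's embedding at synthesis time.
```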
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG in which entities and relations are composed of free-form text. However, previous work on KG completion and CKG completion suffers from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of both graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in terms of their methods and applications. Specifically, we first introduce the challenges of FKGC and the commonly used KGs and CKGs. We then systematically categorize and summarize existing work by the type of KG and the method used. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Over the past few years, large knowledge bases have been constructed to store massive amounts of knowledge. However, these knowledge bases are highly incomplete, for example, over 70% of people in Freebase have no known place of birth. To solve this problem, we propose a query-driven knowledge base completion system with multimodal fusion of unstructured and structured information. To effectively fuse unstructured information from the Web and structured information in knowledge bases to achieve good performance, our system builds multimodal knowledge graphs based on question answering and rule inference. We propose a multimodal path fusion algorithm to rank candidate answers based on different paths in the multimodal knowledge graphs, achieving much better performance than question answering, rule inference and a baseline fusion algorithm. To improve system efficiency, query-driven techniques are utilized to reduce the runtime of our system, providing fast responses to user queries. Extensive experiments have been conducted to demonstrate the effectiveness and efficiency of our system.
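As a purely hypothetical illustration of ranking candidates by fusing evidence from multiple graph paths (the paper's actual multimodal path fusion algorithm may differ substantially), one simple scheme is a noisy-or combination of per-path confidences:

```python
# Hypothetical path-evidence fusion: each candidate answer is supported by one or
# more paths in the multimodal knowledge graph, each with its own confidence.
def noisy_or_fusion(path_confidences):
    """Combine per-path confidences for one candidate into a single score."""
    miss = 1.0
    for p in path_confidences:
        miss *= (1.0 - p)
    return 1.0 - miss

candidates = {
    "Honolulu": [0.8, 0.4],   # confidences of paths supporting this answer
    "Chicago":  [0.3],
}
ranked = sorted(candidates, key=lambda a: noisy_or_fusion(candidates[a]), reverse=True)
print(ranked)  # ['Honolulu', 'Chicago']
```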
Recent improvements in KG-to-text generation are due to additional auxiliary pre-training tasks designed to boost the performance of the fine-tuning task. These tasks require extensive computational resources while yielding only marginal improvements. Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks. We do so by proposing a mask structure to capture neighborhood information and a novel type encoder that adds a bias to the graph-attention weights depending on the connection type. Experiments on two KG-to-text benchmark datasets show our models are competitive while requiring fewer parameters and no additional pre-training tasks. By formulating the problem as a framework, we can interchange the various proposed components and begin interpreting KG-to-text generative models based on the topological and type information found in a graph.
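A minimal sketch of the two mechanisms described above, with assumed shapes and names rather than the paper's exact architecture: attention logits are restricted to graph neighbours by a mask, and a learned per-connection-type bias is added before the softmax.

```python
import numpy as np

def masked_typed_attention(Q, K, V, neighbor_mask, type_ids, type_bias):
    """
    Q, K, V:        (n, d) query/key/value vectors for n graph nodes
    neighbor_mask:  (n, n) 1 where node j is in node i's neighbourhood, else 0
    type_ids:       (n, n) integer connection type between each pair of nodes
    type_bias:      (num_types,) learned scalar bias per connection type
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                          # scaled dot-product scores
    logits = logits + type_bias[type_ids]                  # type-dependent additive bias
    logits = np.where(neighbor_mask == 1, logits, -1e9)    # mask out non-neighbours
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V
```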
Bayesian optimization (BayesOpt) is the gold standard for query-efficient continuous optimization. However, its adoption for drug design has been hindered by the discrete, high-dimensional nature of the decision variables. We develop a new approach (LaMBO) which jointly trains a denoising autoencoder with a discriminative multi-task Gaussian process head, enabling gradient-based optimization of multi-objective acquisition functions in the latent space of the autoencoder. These acquisition functions allow LaMBO to balance the explore-exploit tradeoff over multiple design rounds, and to balance objective tradeoffs by optimizing sequences at many different points on the Pareto frontier. We evaluate LaMBO on two small-molecule design tasks, and introduce new tasks for optimizing in silico and in vitro properties. In our experiments, LaMBO outperforms genetic optimizers and does not require a large amount of pretraining, showing that BayesOpt is practical and effective for biological sequence design.
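A rough sketch of latent-space acquisition optimization, under assumptions that do not match LaMBO's exact recipe (the UCB acquisition, finite-difference ascent, and function names below are illustrative placeholders): sequences are encoded to latents, a surrogate predicts objective mean and uncertainty from a latent, and the acquisition is ascended in latent space before decoding back to a sequence.

```python
import numpy as np

def optimize_acquisition(z0, surrogate_mean_and_std, steps=100, lr=0.05, kappa=2.0):
    """Finite-difference ascent on a UCB acquisition over a latent vector z."""
    z = z0.copy()

    def acq(z):
        mu, sigma = surrogate_mean_and_std(z)   # e.g., a GP head on encoder latents
        return mu + kappa * sigma               # upper confidence bound

    eps = 1e-3
    for _ in range(steps):
        grad = np.array([
            (acq(z + eps * e) - acq(z - eps * e)) / (2 * eps)
            for e in np.eye(len(z))
        ])
        z = z + lr * grad
    return z

# Outer loop (schematic): encode candidate sequences, fit the surrogate on
# (latent, objective) pairs, run optimize_acquisition from promising latents,
# then decode the optimized latents into new sequences and evaluate them.
```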
We propose a new method for supervised learning with multiple sets of features ("views"). Cooperative learning combines the usual squared-error loss of predictions with an "agreement" penalty that encourages the predictions from different data views to agree. By varying the weight of the agreement penalty, we obtain a continuum of solutions that includes the well-known early and late fusion approaches. Cooperative learning chooses the degree of agreement (or fusion) in an adaptive manner, using a validation set or cross-validation to estimate test set prediction error. One version of our fitting procedure is modular, where one can choose different fitting mechanisms (e.g. lasso, random forests, boosting, or neural networks) appropriate for different data views. In the setting of cooperative regularized linear regression, the method combines the lasso penalty with the agreement penalty. The method can be especially powerful when the different data views share some underlying relationship in their signals that we aim to strengthen, while each view has its own idiosyncratic noise that we aim to reduce. We illustrate the effectiveness of the proposed method through simulated and real data examples.
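In the cooperative regularized linear regression setting mentioned above, the objective takes roughly the following form (the notation is schematic and may differ from the paper's), with feature matrices X and Z for the two views, response y, agreement weight rho, and lasso penalties:

```latex
\min_{\beta_x,\,\beta_z}\;
  \tfrac{1}{2}\,\big\| y - X\beta_x - Z\beta_z \big\|_2^2
  \;+\; \tfrac{\rho}{2}\,\big\| X\beta_x - Z\beta_z \big\|_2^2
  \;+\; \lambda_x \|\beta_x\|_1 \;+\; \lambda_z \|\beta_z\|_1
```

Setting rho = 0 recovers early fusion (an ordinary lasso on the concatenated features), while increasing rho pushes the fitted contributions of the two views toward agreement.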
Much of the existing linguistic data in many of the world's languages is locked away in non-digitized books and documents. Optical character recognition (OCR) can be used to produce digitized text, and previous work has demonstrated the utility of neural post-correction methods that improve the results of general-purpose OCR systems on recognition of less-well-resourced languages. However, these methods rely on manually curated post-correction data, which are relatively scarce compared to the un-annotated raw images that need to be digitized. In this paper, we present a semi-supervised learning method that makes it possible to utilize these raw images to improve performance, specifically through the use of self-training, a technique where a model is iteratively trained on its own outputs. In addition, to enforce consistency in the recognized vocabulary, we introduce a lexically aware decoding method that augments the neural post-correction model with a count-based language model constructed from the recognized texts, implemented using weighted finite-state automata (WFSA) for efficient and effective decoding. Results on four endangered languages demonstrate the utility of the proposed method, with relative error reductions of 15-29%, where we find that the combination of self-training and lexically aware decoding is essential for achieving consistent improvements. Data and code are available at https://shrutirij.github.io/ocr-el/.
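A schematic of the self-training loop described above (the function and method names are placeholders, and the paper's actual recipe includes further components such as the WFSA-based lexically aware decoder used when generating corrections):

```python
def self_train(model, labeled_pairs, raw_ocr_outputs, rounds=3):
    """Iteratively add the model's own corrections as pseudo-labels and retrain.

    labeled_pairs:   list of (ocr_text, gold_correction) from the small annotated set
    raw_ocr_outputs: OCR text for un-annotated pages, with no gold corrections
    """
    train_data = list(labeled_pairs)
    for _ in range(rounds):
        model.fit(train_data)                        # train the post-correction model
        pseudo = [(src, model.correct(src))          # label the un-annotated pages
                  for src in raw_ocr_outputs]
        train_data = list(labeled_pairs) + pseudo    # gold data is always retained
    return model
```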
Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks. We show that while knowledge distillation can improve student generalization, it does not typically work as is commonly understood: there often remains a surprisingly large discrepancy between the predictive distributions of the teacher and the student, even in cases when the student has the capacity to match the teacher perfectly. We identify difficulties in optimization as a key reason why the student is unable to match the teacher. We also show how the details of the dataset used for distillation play a role in how closely the student matches the teacher, and that more closely matching the teacher paradoxically does not always lead to better student generalization.
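For reference, the textbook distillation objective that "matching the teacher's predictive distribution" refers to (this is the standard formulation, not a detail specific to the paper): the student is trained on the teacher's temperature-softened outputs in addition to the hard labels.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * (soft-target cross-entropy at temperature T) + (1 - alpha) * hard-label CE."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kd = -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean() * T * T
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kd + (1 - alpha) * ce
```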